SpreadGNN: Decentralized Multi-Task Federated Learning for Graph Neural Networks on Molecular Data

Authors

Abstract

Graph Neural Networks (GNNs) are the first-choice methods for graph machine learning problems thanks to their ability to learn state-of-the-art representations from graph-structured data. However, centralizing a massive amount of real-world graph data for GNN training is prohibitive due to user-side privacy concerns, regulation restrictions, and commercial competition. Federated Learning is the de-facto standard for collaboratively training models over many distributed edge devices without the need for centralization. Nevertheless, training graph neural networks in a federated setting is vaguely defined and brings statistical and systems challenges. This work proposes SpreadGNN, a novel multi-task federated training framework capable of operating in the presence of partial labels and the absence of a central server, for the first time in the literature. We provide convergence guarantees and empirically demonstrate the efficacy of our framework on a variety of non-I.I.D. distributed graph-level molecular property prediction datasets with partial labels. Our results show that SpreadGNN outperforms GNN models trained over a central server-dependent federated learning system, even in constrained topologies.
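The serverless setting described in the abstract replaces a central aggregator with peer-to-peer parameter mixing over a communication topology. As a minimal illustrative sketch (not the actual SpreadGNN algorithm, which additionally exchanges task-relationship information for multi-task learning), one decentralized averaging round can be written as each client combining its parameters with those of its topology neighbors via a mixing matrix; the function and variable names below are hypothetical:

```python
import numpy as np

def decentralized_average_round(params, adjacency, mix_weights=None):
    """One communication round of serverless (decentralized) averaging:
    each client mixes its parameter vector with its neighbors' according
    to a row-stochastic mixing matrix. Illustrative sketch only."""
    n = len(params)
    if mix_weights is None:
        # Uniform mixing over each client's neighborhood, including itself.
        mix_weights = np.zeros((n, n))
        for i in range(n):
            neigh = [j for j in range(n) if adjacency[i][j] or i == j]
            for j in neigh:
                mix_weights[i, j] = 1.0 / len(neigh)
    return [sum(mix_weights[i, j] * params[j] for j in range(n))
            for i in range(n)]

# Constrained topology: a ring of 4 clients, each seeing only 2 neighbors.
adj = [[0, 1, 0, 1],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [1, 0, 1, 0]]
params = [np.full(3, float(i)) for i in range(4)]  # client i holds the vector (i, i, i)
for _ in range(20):
    params = decentralized_average_round(params, adj)
# Repeated mixing drives every client toward the global mean, here 1.5,
# without any client ever communicating with a central server.
```

In practice each client would also run local SGD steps on its own (partially labeled) data between mixing rounds; the sketch isolates only the server-free aggregation step.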


Similar Articles

Federated Multi-Task Learning

Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices. In this work, we show that multi-task learning is naturally suited to handle the statistical challenges of this setting, and propose a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues. Our method and theor...


Rapid adaptation for deep neural networks through multi-task learning

We propose a novel approach to addressing the adaptation effectiveness issue in parameter adaptation for deep neural network (DNN) based acoustic models for automatic speech recognition by adding one or more small auxiliary output layers modeling broad acoustic units, such as mono-phones or tied-state (often called senone) clusters. In scenarios with a limited amount of available adaptation dat...


Multi-task learning deep neural networks for speech feature denoising

Traditional automatic speech recognition (ASR) systems usually suffer a sharp performance drop when noise is present in speech. To make ASR robust, we introduce a new model using multi-task learning deep neural networks (MTL-DNN) to solve the speech denoising task at the feature level. In this model, the networks are initialized by pre-training restricted Boltzmann machines (RBM) and fine-tuned by...


Multi-task Neural Networks for QSAR Predictions

Although artificial neural networks have occasionally been used for Quantitative Structure-Activity/Property Relationship (QSAR/QSPR) studies in the past, the literature has of late been dominated by other machine learning techniques such as random forests. However, a variety of new neural net techniques along with successful applications in other domains have renewed interest in network approac...


Multi-Task and Transfer Learning with Recurrent Neural Networks

Being able to obtain a model of a dynamical system is useful for a variety of purposes, e.g. condition monitoring and model-based control. The dynamics of complex technical systems such as gas or wind turbines can be approximated by data driven models, e.g. recurrent neural networks. Such methods have proven to be powerful alternatives to analytical models which are not always available or may ...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2022

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v36i6.20643